onboard computer
CHILD (Controller for Humanoid Imitation and Live Demonstration): a Whole-Body Humanoid Teleoperation System
Myers, Noboru, Kwon, Obin, Yamsani, Sankalp, Kim, Joohyung
Recent advances in teleoperation have demonstrated robots performing complex manipulation tasks. However, existing works rarely support whole-body joint-level teleoperation for humanoid robots, limiting the diversity of tasks that can be accomplished. This work presents Controller for Humanoid Imitation and Live Demonstration (CHILD), a compact, reconfigurable teleoperation system that enables joint-level control over humanoid robots. CHILD fits within a standard baby carrier, allows the operator to control all four limbs, and supports both direct joint mapping for full-body control and loco-manipulation. Adaptive force feedback is incorporated to enhance the operator experience and prevent unsafe joint movements.
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- North America > United States > Illinois > Champaign County > Champaign (0.04)
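The direct joint mapping that CHILD's abstract describes can be sketched as clamping each leader-device joint reading into the follower robot's joint limits, with a resistive feedback torque whenever the operator pushes past a limit. The joint names, limits, and gain below are illustrative assumptions, not CHILD's actual interface.

```python
# Sketch of direct joint mapping for leader -> follower teleoperation.
# Joint names, limits, and the feedback gain are hypothetical, not CHILD's.

JOINT_LIMITS = {                 # follower joint limits in radians
    "shoulder_pitch": (-1.5, 1.5),
    "elbow": (0.0, 2.4),
}

def map_joints(leader_angles, limits=JOINT_LIMITS):
    """Clamp each leader joint reading into the follower's joint limits."""
    cmd = {}
    for name, q in leader_angles.items():
        lo, hi = limits[name]
        cmd[name] = min(max(q, lo), hi)
    return cmd

def feedback_torque(q, lo, hi, k=5.0):
    """Resistive torque pushing the operator back inside the limit band,
    one simple way to realize force feedback against unsafe motions."""
    if q < lo:
        return k * (lo - q)
    if q > hi:
        return -k * (q - hi)
    return 0.0
```

In this sketch, a leader reading outside the follower's range is both clamped on the robot side and resisted on the operator side, so the operator feels where the safe workspace ends.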
I Went Birding With the World's First AI-Powered Binoculars
The Austrian company Swarovski Optik has been innovating in long-range optical instruments for 75 years. Now, like so many other well-established companies these days, it's dipping into the world of artificial intelligence to enhance its latest product. Earlier this year, the company released the AX Visio, a pair of AI-powered binoculars codeveloped with the famed Australian industrial designer Marc Newson. These are the world's first pair of AI binoculars, the company claims, with an onboard computer that can identify nearly any bird you point them at. They have a built-in camera and use computer vision software to ID over 9,000 bird species in real time.
Assisted Physical Interaction: Autonomous Aerial Robots with Neural Network Detection, Navigation, and Safety Layers
Berra, Andrea, Sankaranarayanan, Viswa Narayanan, Seisa, Achilleas Santi, Mellet, Julien, Gamage, Udayanga G. W. K. N., Satpute, Sumeet Gajanan, Ruggiero, Fabio, Lippiello, Vincenzo, Tolu, Silvia, Fumagalli, Matteo, Nikolakopoulos, George, Soto, Miguel Ángel Trujillo, Heredia, Guillermo
The paper introduces a novel framework for safe and autonomous aerial physical interaction in industrial settings. It comprises two main components: a neural network-based target detection system enhanced with edge computing to reduce the onboard computational load, and a control barrier function (CBF)-based controller for safe and precise maneuvering. The target detection system is trained on a dataset captured under challenging visual conditions and evaluated for accuracy on unseen data with changing lighting conditions. Depth features are utilized for target pose estimation, with the entire detection framework offloaded to low-latency edge computing. The CBF-based controller enables the UAV to converge safely to the target for precise contact. Simulated evaluations of both the controller and target detection are presented, alongside an analysis of real-world detection performance.
- North America > Costa Rica > Heredia Province > Heredia (0.04)
- Europe > Spain > Andalusia > Seville Province > Seville (0.04)
- Europe > Denmark (0.04)
- (2 more...)
- Transportation (0.93)
- Government > Military (0.46)
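The role of the CBF-based controller in the abstract above can be illustrated with a minimal one-dimensional example: define a barrier h = d - d_min on the distance d to the contact target and clamp the commanded approach velocity so that dh/dt >= -alpha * h holds. This is a generic sketch of the CBF condition, not the paper's controller; D_MIN and ALPHA are assumed values.

```python
# Minimal 1-D control-barrier-function velocity filter (generic sketch,
# not the paper's controller). State: distance d to the contact target.

D_MIN = 0.2   # minimum allowed standoff distance in meters (assumed)
ALPHA = 1.0   # CBF gain (assumed)

def safe_velocity(d, v_cmd):
    """Filter a commanded closing velocity v_cmd (negative = approach)
    so that h = d - D_MIN satisfies dh/dt >= -ALPHA * h."""
    h = d - D_MIN
    v_min = -ALPHA * h        # most negative d-dot the barrier allows
    return max(v_cmd, v_min)
```

Far from the target an aggressive command passes nearly unchanged; close to the target the filter throttles the approach so the vehicle converges without overshooting into the structure.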
An Open-Source Soft Robotic Platform for Autonomous Aerial Manipulation in the Wild
Bauer, Erik, Blöchlinger, Marc, Strauch, Pascal, Raayatsanati, Arman, Cavelti, Curdin, Katzschmann, Robert K.
Aerial manipulation combines the versatility and speed of flying platforms with the functional capabilities of mobile manipulation, which presents significant challenges due to the need for precise localization and control. Traditionally, researchers have relied on offboard perception systems, which limit operation to specially equipped indoor environments that are expensive and impractical. In this work, we introduce a novel platform for autonomous aerial manipulation that relies exclusively on onboard perception. Our platform can perform aerial manipulation in various indoor and outdoor environments without depending on external perception systems. Our experimental results demonstrate the platform's ability to autonomously grasp various objects in diverse settings. This advancement significantly improves the scalability and practicality of aerial manipulation applications by eliminating the need for costly tracking solutions. To accelerate future research, we open-source our ROS 2 software stack and custom hardware design, making our contributions accessible to the broader research community.
- Transportation > Air (0.47)
- Aerospace & Defense (0.46)
- Information Technology (0.46)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.46)
Narrowing your FOV with SOLiD: Spatially Organized and Lightweight Global Descriptor for FOV-constrained LiDAR Place Recognition
Kim, Hogyun, Choi, Jiwon, Sim, Taehu, Kim, Giseop, Cho, Younggun
In real-world robot navigation, we often encounter limited-FOV situations caused by factors such as sensor fusion or sensor mounting. A limited FOV hinders descriptor generation and adversely impacts place recognition, making it difficult to correct accumulated drift errors in a consistent map using LiDAR-based place recognition. In this paper, we therefore propose a robust LiDAR-based place recognition method for handling narrow-FOV scenarios. The proposed method establishes a spatial organization based on range-elevation and azimuth-elevation bins to represent places. In addition, we achieve a robust place description through reweighting based on vertical direction information. Based on these representations, our method can address rotational changes and determine the initial heading. We also designed the approach to be lightweight and fast for the robot's onboard autonomy. For rigorous validation, the proposed method was tested across various LiDAR place recognition scenarios (i.e., single-session, multi-session, and multi-robot). To the best of our knowledge, this is the first method to cope with a restricted FOV in LiDAR place recognition. Our place description and SLAM code will be released, and supplementary materials for our descriptor are available at \texttt{\url{https://sites.google.com/view/lidar-solid}}.
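The azimuth-elevation organization and rotation handling described in the abstract above can be sketched as an occupancy histogram whose azimuth axis is searched over circular shifts; the bin counts and matching rule here are assumptions for illustration, not SOLiD's actual formulation.

```python
import numpy as np

def azel_descriptor(points, n_az=8, n_el=4):
    """Occupancy histogram over azimuth-elevation bins for an (N, 3)
    point cloud (rough sketch of a spatially organized descriptor)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    az = np.arctan2(y, x)                        # [-pi, pi]
    el = np.arctan2(z, np.hypot(x, y))           # [-pi/2, pi/2]
    ai = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    ei = np.clip(((el + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1)
    desc = np.zeros((n_az, n_el))
    np.add.at(desc, (ai, ei), 1)                 # count points per bin
    return desc

def rotation_invariant_distance(d1, d2):
    """Minimum L1 distance over circular shifts of the azimuth axis,
    which absorbs a change of heading between visits."""
    return min(np.abs(np.roll(d1, s, axis=0) - d2).sum()
               for s in range(d1.shape[0]))
```

The shift that minimizes the distance also yields a coarse initial-heading estimate, since one azimuth-bin shift corresponds to a known rotation angle.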
Onboard dynamic-object detection and tracking for autonomous robot navigation with RGB-D camera
Xu, Zhefan, Zhan, Xiaoyang, Xiu, Yumeng, Suzuki, Christopher, Shimada, Kenji
Deploying autonomous robots in crowded indoor environments usually requires them to have accurate dynamic obstacle perception. Although plenty of previous works in the autonomous driving field have investigated the 3D object detection problem, the use of dense point clouds from a heavy Light Detection and Ranging (LiDAR) sensor, and the high computation cost of learning-based data processing, make those methods inapplicable to small robots, such as vision-based UAVs with small onboard computers. To address this issue, we propose a lightweight 3D dynamic obstacle detection and tracking (DODT) method based on an RGB-D camera, designed for low-power robots with limited computing power. Our method adopts a novel ensemble detection strategy, combining multiple computationally efficient but low-accuracy detectors to achieve real-time, high-accuracy obstacle detection. In addition, we introduce a feature-based data association and tracking method that uses point clouds' statistical features to prevent mismatches. Our system also includes an optional, auxiliary learning-based module to enhance the obstacle detection range and dynamic obstacle identification. The proposed method is implemented on a small quadcopter, and the results show that it achieves the lowest position error (0.11 m) and a comparable velocity error (0.23 m/s) among the benchmarked algorithms running on the robot's onboard computer. The flight experiments prove that the tracking results from the proposed method allow the robot to efficiently alter its trajectory when navigating dynamic environments. Our software is available on GitHub as an open-source ROS package.
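The feature-based association step mentioned in the abstract above can be illustrated by matching obstacle clusters between frames with a gated nearest-neighbor rule on simple statistical features (centroid plus a size statistic). The feature vector and gate value are placeholders for illustration, not the paper's exact design.

```python
import math

def feature_distance(a, b):
    """Distance between two obstacle clusters described by hypothetical
    (cx, cy, cz, size) statistical features."""
    return math.dist(a[:3], b[:3]) + abs(a[3] - b[3])

def associate(prev_tracks, detections, gate=0.5):
    """Greedy gated nearest-neighbor association: returns a dict
    mapping track index -> detection index; tracks with no detection
    inside the gate are left unmatched."""
    matches, used = {}, set()
    for ti, t in enumerate(prev_tracks):
        best, best_d = None, gate
        for di, d in enumerate(detections):
            if di in used:
                continue
            dist = feature_distance(t, d)
            if dist < best_d:
                best, best_d = di, dist
        if best is not None:
            matches[ti] = best
            used.add(best)
    return matches
```

Gating rejects matches whose features differ too much, which is what prevents a track from jumping to a different nearby obstacle between frames.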
Mobile Manipulation Platform for Autonomous Indoor Inspections in Low-Clearance Areas
Pearson, Erik, Szenher, Paul, Huang, Christine, Englot, Brendan
Mobile manipulators have been used for inspection, maintenance, and repair tasks over the years, but there are some key limitations. Stability concerns typically require mobile platforms to be large in order to handle far-reaching manipulators, or require the manipulators to have drastically reduced workspaces to fit onto smaller mobile platforms. We therefore propose a combination of two widely used robots, the Clearpath Jackal unmanned ground vehicle and the Kinova Gen3 six degree-of-freedom manipulator. The Jackal has a small footprint and works well in low-clearance indoor environments. Extensive testing of localization, navigation, and mapping using LiDAR sensors makes the Jackal a well-developed mobile platform suitable for mobile manipulation. The Gen3 has a long reach with reasonable power consumption for manipulation tasks. A wrist camera for RGB-D sensing and a customizable end-effector interface make the Gen3 suitable for a myriad of manipulation tasks. Typically these features would result in an unstable platform; however, with a few minor hardware and software modifications, we have produced a stable, high-performance mobile manipulation platform with significant mobility, reach, sensing, and maneuverability for indoor inspection tasks, without degrading the component robots' individual capabilities. These assertions were investigated in hardware via semi-autonomous navigation to waypoints in a busy indoor environment and high-precision self-alignment alongside planar structures for intervention tasks.
- North America > United States > New York (0.04)
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
StereoVoxelNet: Real-Time Obstacle Detection Based on Occupancy Voxels from a Stereo Camera Using Deep Neural Networks
Li, Hongyu, Li, Zhengang, Akmandor, Neset Unver, Jiang, Huaizu, Wang, Yanzhi, Padir, Taskin
Obstacle detection is a safety-critical problem in robot navigation, where stereo matching is a popular vision-based approach. While deep neural networks have shown impressive results in computer vision, most previous obstacle detection works leverage only traditional stereo matching techniques to meet the computational constraints of real-time feedback. This paper proposes a computationally efficient method that employs a deep neural network to detect occupancy from stereo images directly. Instead of learning the point cloud correspondence from the stereo data, our approach extracts a compact obstacle distribution based on volumetric representations. In addition, we prune the computation of safety-irrelevant spaces in a coarse-to-fine manner based on octrees generated by the decoder. As a result, we achieve real-time performance on an onboard computer (NVIDIA Jetson TX2). Our approach detects obstacles accurately up to a range of 32 meters and achieves better IoU (Intersection over Union) and CD (Chamfer Distance) scores with only 2% of the computation cost of the state-of-the-art stereo model. Furthermore, we validate our method's robustness and real-world feasibility through autonomous navigation experiments with a real robot. Hence, our work contributes toward closing the gap between stereo-based systems in robot perception and state-of-the-art stereo models in computer vision. To counter the scarcity of high-quality real-world indoor stereo datasets, we collect a 1.36-hour stereo dataset with a mobile robot, which is used to fine-tune our model. The dataset, the code, and further details, including additional visualizations, are available at https://lhy.xyz/stereovoxelnet
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Germany > Baden-Württemberg > Freiburg (0.04)
- Asia > Japan > Honshū > Kansai > Hyogo Prefecture > Kobe (0.04)
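The IoU and Chamfer Distance scores reported in the abstract above are standard metrics; generic definitions over occupancy grids and point sets might look like the following (not the paper's actual evaluation code):

```python
import numpy as np

def voxel_iou(pred, gt):
    """Intersection-over-Union of two boolean occupancy grids."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two (N, 3) point sets:
    mean nearest-neighbor distance taken in both directions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

IoU rewards exact voxel overlap, while the Chamfer distance tolerates small spatial offsets, which is why the two are often reported together for occupancy prediction.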
Athletic Mobile Manipulator System for Robotic Wheelchair Tennis
Zaidi, Zulfiqar, Martin, Daniel, Belles, Nathaniel, Zakharov, Viacheslav, Krishna, Arjun, Lee, Kin Man, Wagstaff, Peter, Naik, Sumedh, Sklar, Matthew, Choi, Sugju, Kakehi, Yoshiki, Patil, Ruturaj, Mallemadugula, Divya, Pesce, Florian, Wilson, Peter, Hom, Wendell, Diamond, Matan, Zhao, Bryan, Moorman, Nina, Paleja, Rohan, Chen, Letian, Seraj, Esmaeil, Gombolay, Matthew
Athletics are a quintessential and universal expression of humanity. From French monks who in the 12th century invented jeu de paume, the precursor to modern lawn tennis, back to the K'iche' people who played the Maya Ballgame as a form of religious expression over three thousand years ago, humans have sought to train their minds and bodies to excel in sporting contests. Advances in robotics are opening up the possibility of robots in sports. Yet, key challenges remain, as most prior works in robotics for sports are limited to pristine sensing environments, do not require significant force generation, or are on miniaturized scales unsuited for joint human-robot play. In this paper, we propose the first open-source, autonomous robot for playing regulation wheelchair tennis. We demonstrate the performance of our full-stack system in executing ground strokes and evaluate each of the system's hardware and software components. The goal of this paper is to (1) inspire more research in human-scale robot athletics and (2) establish the first baseline for a reproducible wheelchair tennis robot for regulation singles play. Our paper contributes to the science of systems design and poses a set of key challenges for the robotics community to address in striving towards robots that can match human capabilities in sports.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Sweden (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
Visual Servoing Approach for Autonomous UAV Landing on a Moving Vehicle
Keipour, Azarakhsh, Pereira, Guilherme A. S., Bonatti, Rogerio, Garg, Rohit, Rastogi, Puru, Dubey, Geetesh, Scherer, Sebastian
Many aerial robotic applications require the ability to land on moving platforms, such as delivery trucks and marine research boats. We present a method to autonomously land an Unmanned Aerial Vehicle on a moving vehicle. A visual servoing controller approaches the ground vehicle using velocity commands calculated directly in image space. The control laws generate velocity commands in all three dimensions, eliminating the need for a separate height controller. The method has shown the ability to approach and land on the moving deck in simulation and in indoor and outdoor environments, and it has provided the fastest landing approach among the methods it was compared against. Unlike many existing methods for landing on fast-moving platforms, this method does not rely on additional external setups, such as RTK, a motion capture system, a ground station, offboard processing, or communication with the vehicle, and it requires only a minimal set of hardware and localization sensors. Videos and source code are also provided.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- South America > Brazil (0.04)
- North America > United States > West Virginia > Monongalia County > Morgantown (0.04)
- (7 more...)
- Aerospace & Defense > Aircraft (0.67)
- Transportation > Ground > Road (0.66)
- Information Technology > Robotics & Automation (0.49)
- Transportation > Air (0.47)
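The image-space velocity law described in the abstract above, generating all three velocity components directly from the target's appearance in the image, can be sketched as proportional control on pixel error plus a descent term driven by the target's apparent size. The gains, principal point, and area-based depth cue are illustrative assumptions, not the paper's actual control laws.

```python
# Sketch of an image-space (IBVS-style) landing velocity law. The gains,
# image center, and desired apparent area are hypothetical values.

def ibvs_velocity(u, v, area, u0=320.0, v0=240.0, area_des=20000.0,
                  k_xy=0.002, k_z=1e-5):
    """Velocity command (vx, vy, vz) from the landing pad's pixel
    centroid (u, v) and apparent area in a 640x480 image. The lateral
    terms center the pad; the vertical term descends until the pad's
    apparent size reaches area_des."""
    vx = k_xy * (u - u0)            # correct horizontal pixel error
    vy = k_xy * (v - v0)            # correct vertical pixel error
    vz = -k_z * (area_des - area)   # descend while the pad looks small
    return vx, vy, vz
```

Because depth enters only through the pad's apparent size, the vertical axis needs no range sensor or external localization, which mirrors the abstract's point about eliminating a separate height controller.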